COMPARISON OF PERFORMANCE OF DIFFERENT K VALUES WITH K-FOLD CROSS VALIDATION IN A GRAPH-BASED LEARNING MODEL FOR lncRNA-DISEASE PREDICTION

Authors

Abstract

In machine learning, the k value in the k-fold cross-validation method significantly affects the performance of the created model. In the studies that have been carried out, k is usually taken as five or ten, because these two values are thought to produce average estimates. However, there is no official rule. It has been observed that few studies have been carried out that use different k values when training models. In this study, an evaluation was performed on an lncRNA-disease prediction model using various k values (2, 3, 4, 5, 6, 7, 8, 9, and 10) on the datasets. The obtained results were compared and the most suitable k value was determined. In future studies, it is aimed to carry out a more comprehensive study by increasing the number of data sets.
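As a rough illustration of the kind of comparison described above (not the authors' actual graph-based lncRNA-disease pipeline, which is not available here), the sketch below assumes a synthetic dataset and a generic scikit-learn classifier, and reports the mean cross-validated accuracy for k = 2 through 10:

```python
# Minimal sketch: comparing k-fold cross-validation scores for several k values.
# A synthetic dataset and a generic classifier stand in for the graph-based
# lncRNA-disease model described in the abstract.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = RandomForestClassifier(random_state=0)

for k in range(2, 11):  # k = 2, 3, ..., 10 as in the study
    cv = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"k={k:2d}  mean accuracy={scores.mean():.3f}  std={scores.std():.3f}")
```

Comparing the mean and spread of the per-fold scores across k values mirrors the evaluation strategy the abstract describes.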


Similar Articles

The 'K' in K-fold Cross Validation

The K-fold Cross Validation (KCV) technique is one of the approaches most widely used by practitioners for model selection and error estimation of classifiers. The KCV consists of splitting a dataset into k subsets; then, iteratively, some of them are used to learn the model, while the others are exploited to assess its performance. However, in spite of the KCV's success, only practical rule-of-thumb me...
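A minimal sketch of the KCV procedure outlined in this excerpt, assuming scikit-learn and a simple logistic-regression classifier as a stand-in for whatever model is being validated:

```python
# Minimal sketch of the KCV procedure: split the data into k subsets, learn the
# model on k-1 of them, and assess it on the held-out subset in each iteration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=300, n_features=10, random_state=1)
kf = KFold(n_splits=5, shuffle=True, random_state=1)

fold_scores = []
for train_idx, test_idx in kf.split(X):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X[train_idx], y[train_idx])    # learn the model on k-1 subsets
    preds = clf.predict(X[test_idx])       # assess it on the remaining subset
    fold_scores.append(accuracy_score(y[test_idx], preds))

print("per-fold accuracy:", np.round(fold_scores, 3))
print("mean accuracy:", np.mean(fold_scores))
```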


A K-fold Averaging Cross-validation Procedure.

Cross-validation-type methods have been widely used to facilitate model estimation and variable selection. In this work, we suggest a new K-fold cross validation procedure to select a candidate 'optimal' model from each hold-out fold and average the K candidate 'optimal' models to obtain the ultimate model. Due to the averaging effect, the variance of the proposed estimates can be significan...
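The general averaging idea can be sketched as follows, under the simplifying assumption that the per-fold candidate models are linear, so their parameters can be averaged directly; the cited paper's exact selection and averaging procedure may well differ:

```python
# Rough sketch of the averaging idea: fit one candidate model per fold and
# average their parameters. This illustrates the general principle only.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

X, y = make_regression(n_samples=200, n_features=8, noise=5.0, random_state=2)
kf = KFold(n_splits=5, shuffle=True, random_state=2)

coefs, intercepts = [], []
for train_idx, _ in kf.split(X):
    m = Ridge(alpha=1.0).fit(X[train_idx], y[train_idx])  # candidate model per fold
    coefs.append(m.coef_)
    intercepts.append(m.intercept_)

# The "ultimate" model uses the averaged parameters of the K candidates.
avg_coef = np.mean(coefs, axis=0)
avg_intercept = np.mean(intercepts)
y_pred = X @ avg_coef + avg_intercept
print("averaged-model RMSE:", np.sqrt(np.mean((y - y_pred) ** 2)))
```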


Estimators of Variance for K-Fold Cross-Validation

In machine learning, the standard measure of accuracy for models is the prediction error (PE), i.e. the expected loss on future examples. We consider here the i.i.d. regression or classification setups, where future examples are assumed to be independently sampled from the distribution that generated the training set. When the data distribution is unknown, PE cannot be computed. T...


Improving Adaptive Boosting with k-Cross-Fold Validation

As seen in the bibliography, Adaptive Boosting (Adaboost) is one of the best-known methods for increasing the performance of an ensemble of neural networks. We introduce a new method based on Adaboost in which we have applied Cross-Validation to increase the diversity of the ensemble. We have used Cross-Validation over the whole learning set to generate a specific training set and validation set for ...
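The diversity idea in this excerpt can be illustrated roughly as below, assuming scikit-learn neural networks and plain majority voting rather than the authors' Adaboost-based combination:

```python
# Very rough sketch: give each ensemble member a different k-fold partition of
# the whole learning set so the members train on different data and diversity
# increases. Generic illustration only, not the authors' Adaboost variant.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=15, random_state=3)
kf = KFold(n_splits=5, shuffle=True, random_state=3)

members = []
for train_idx, val_idx in kf.split(X):
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=3)
    net.fit(X[train_idx], y[train_idx])   # each member sees a different split
    members.append(net)

# Combine the members by majority vote.
votes = np.array([m.predict(X) for m in members])
ensemble_pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("ensemble training accuracy:", (ensemble_pred == y).mean())
```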


Investigating the Feasibility of a Proposed Model for Geometric Design of Deployable Arch Structures

Deployable scissor-type structures are composed of so-called scissor-like elements (SLEs), which are connected to each other at an intermediate point through a pivotal connection, allowing them to be folded into a compact bundle for storage or transport. Several SLEs are connected to each other in order to form units with regular polygonal plan views. The sides and radii of the polygons are...


Journal

Journal title: Kırklareli University Journal of Engineering and Science

Year: 2023

ISSN: 2458-7494, 2458-7613

DOI: https://doi.org/10.34186/klujes.1248062